Neural MMO
Results of the NeurIPS 2023 Neural MMO Competition on Multi-task Reinforcement Learning
Suárez, Joseph, Choe, Kyoung Whan, Bloomin, David, Gao, Jianming, Li, Yunkun, Feng, Yao, Pola, Saidinesh, Zhang, Kun, Zhu, Yonghui, Pinnaparaju, Nikhil, Li, Hao Xiang, Kanna, Nishaanth, Scott, Daniel, Sullivan, Ryan, Shuman, Rose S., de Alcântara, Lucas, Bradley, Herbie, You, Kirsty, Wu, Bo, Jiang, Yuhao, Li, Qimai, Chen, Jiaxin, Castricato, Louis, Zhu, Xiaolong, Isola, Phillip
We present the results of the NeurIPS 2023 Neural MMO Competition, which attracted over 200 participants and submissions. Participants trained goal-conditional policies that generalize to tasks, maps, and opponents never seen during training. The top solution achieved a score 4x higher than our baseline within 8 hours of training on a single 4090 GPU. We open-source everything relating to Neural MMO and the competition under the MIT license, including the policy weights and training code for our baseline and for the top submissions.
Massively Multiagent Minigames for Training Generalist Agents
Choe, Kyoung Whan, Sullivan, Ryan, Suárez, Joseph
Meta MMO is built on top of Neural MMO, a massively multiagent environment that has been the subject of two previous NeurIPS competitions. Our work expands Neural MMO with several computationally efficient minigames. We explore generalization across Meta MMO by learning to play several minigames with a single set of weights. We release the environment, baselines, and training code under the MIT license. We hope that Meta MMO will spur additional progress on Neural MMO and, more generally, will serve as a useful benchmark for many-agent generalization.
Benchmarking Robustness and Generalization in Multi-Agent Systems: A Case Study on Neural MMO
Chen, Yangkun, Suarez, Joseph, Zhang, Junjie, Yu, Chenghui, Wu, Bo, Chen, Hanmo, Zhu, Hengman, Du, Rui, Qian, Shanliang, Liu, Shuai, Hong, Weijun, He, Jinke, Zhang, Yibing, Zhao, Liang, Zhu, Clare, Togelius, Julian, Mohanty, Sharada, Chen, Jiaxin, Li, Xiu, Zhu, Xiaolong, Isola, Phillip
We present the results of the second Neural MMO challenge, hosted at IJCAI 2022, which received 1600+ submissions. This competition targets robustness and generalization in multi-agent systems: participants train teams of agents to complete a multi-task objective against opponents not seen during training. The competition combines relatively complex environment design with large numbers of agents in the environment. The top submissions demonstrate strong success on this task using mostly standard reinforcement learning (RL) methods combined with domain-specific engineering. We summarize the competition design and results and suggest that, for the academic community, competitions may be a powerful approach to solving hard problems and establishing a solid benchmark for algorithms. We will open-source our benchmark, including the environment wrapper, baselines, a visualization tool, and selected policies for further research.
The Neural MMO Platform for Massively Multiagent Research
Suarez, Joseph, Du, Yilun, Zhu, Clare, Mordatch, Igor, Isola, Phillip
Neural MMO is a computationally accessible research platform that combines large agent populations, long time horizons, open-ended tasks, and modular game systems. Existing environments feature subsets of these properties, but Neural MMO is the first to combine them all. We present Neural MMO as free and open source software with active support, ongoing development, documentation, and additional training, logging, and visualization tools to help users adapt to this new setting. Initial baselines on the platform demonstrate that agents trained in large populations explore more and learn a progression of skills. We raise other more difficult problems such as many-team cooperation as open research questions which Neural MMO is well-suited to answer. Finally, we discuss current limitations of the platform, potential mitigations, and plans for continued development.
Neural MMO v1.3: A Massively Multiagent Game Environment for Training and Evaluating Neural Networks
Suarez, Joseph, Du, Yilun, Mordatch, Igor, Isola, Phillip
Progress in multiagent intelligence research is fundamentally limited by the number and quality of environments available for study. In recent years, simulated games have become a dominant research platform within reinforcement learning, in part due to their accessibility and interpretability. Previous works have targeted and demonstrated success on arcade, first person shooter (FPS), real-time strategy (RTS), and massive online battle arena (MOBA) games. Our work considers massively multiplayer online role-playing games (MMORPGs or MMOs), which capture several complexities of real-world learning that are not well modeled by any other game genre. We present Neural MMO, a massively multiagent game environment inspired by MMOs and discuss our progress on two more general challenges in multiagent systems engineering for AI research: distributed infrastructure and game IO. We further demonstrate that standard policy gradient methods and simple baseline models can learn interesting emergent exploration and specialization behaviors in this setting.
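The game IO challenge mentioned above arises because each agent observes a variable number of nearby entities, while neural policies expect fixed-size inputs. One common way to bridge that gap is to pad each entity list to a fixed length and emit a validity mask; the sketch below illustrates this generic approach, with names and shapes of my own choosing, and is not Neural MMO's actual observation format.

```python
# Pack a variable-length list of entity feature vectors into a fixed-size
# matrix plus a mask marking which rows are real. This is a generic sketch
# of one approach to "game IO", not Neural MMO's implementation.

def pad_entities(entities, max_entities, feature_dim):
    """Return (rows, mask): `rows` is max_entities x feature_dim,
    `mask[i]` is 1 for real entities and 0 for padding."""
    rows, mask = [], []
    for ent in entities[:max_entities]:      # truncate if too many
        rows.append(list(ent))
        mask.append(1)
    while len(rows) < max_entities:          # zero-pad if too few
        rows.append([0.0] * feature_dim)
        mask.append(0)
    return rows, mask

rows, mask = pad_entities([[1.0, 2.0]], max_entities=3, feature_dim=2)
assert mask == [1, 0, 0]
assert rows == [[1.0, 2.0], [0.0, 0.0], [0.0, 0.0]]
```

A downstream network can multiply per-entity features by the mask so that padded rows contribute nothing to pooled representations.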
Action Semantics Network: Considering the Effects of Actions in Multiagent Systems
Wang, Weixun, Yang, Tianpei, Liu, Yong, Hao, Jianye, Hao, Xiaotian, Hu, Yujing, Chen, Yingfeng, Fan, Changjie, Gao, Yang
In multiagent systems (MASs), each agent makes individual decisions but all of them contribute globally to the system evolution. Learning in MASs is difficult since the selection of actions must take place in the presence of other co-learning agents. Moreover, environmental stochasticity and uncertainty increase exponentially with the number of agents. A number of previous works incorporate various multiagent coordination mechanisms into deep multiagent learning architectures to facilitate coordination. However, none of them explicitly consider the action semantics between agents. In this paper, we propose a novel network architecture, named Action Semantics Network (ASN), that explicitly represents these semantics: ASN uses neural networks to characterize how different actions influence other agents. ASN can be easily combined with existing deep reinforcement learning (DRL) algorithms to boost their performance. Experimental results on StarCraft II and Neural MMO show that ASN significantly improves the performance of state-of-the-art DRL approaches relative to a number of alternative network architectures.
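The core ASN idea, as described above, is that actions directed at a specific agent should be scored from features of that agent, while actions on the environment use only the agent's own observation. The sketch below illustrates this split with toy deterministic scoring functions; the function names and the linear scoring are my own illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of the action-semantics split (assumed names, toy math):
# environment actions are scored from self features alone, while each
# agent-directed action is scored from a joint (self, target) representation.

def self_features(obs):
    # Toy stand-in for a network over the agent's own observation.
    return sum(obs)

def pairwise_features(obs, target_obs):
    # Toy stand-in for a network over the (self, target) pair.
    return sum(o * t for o, t in zip(obs, target_obs))

def action_values(obs, target_obs_list, n_env_actions=2):
    """Return one value per action: first the environment actions,
    then one agent-directed action per visible target agent."""
    env_values = [self_features(obs) * (i + 1) for i in range(n_env_actions)]
    directed_values = [pairwise_features(obs, t) for t in target_obs_list]
    return env_values + directed_values

obs = [1.0, 0.5]
targets = [[0.2, 0.4], [1.0, 1.0]]
values = action_values(obs, targets)
assert len(values) == 2 + len(targets)   # one value per available action
```

Because directed actions are scored pairwise, the action space grows and shrinks naturally with the number of visible agents, which matches the variable populations studied in these papers.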
Neural MMO - A Massively Multiagent Game Environment
In recent years, multiagent settings have become an effective platform for deep reinforcement learning research. Despite this progress, a key challenge remains: creating open-ended tasks with a high complexity ceiling, since current environments are either complex but too narrow or open-ended but too simple. Our platform supports a large, variable number of agents within a persistent and open-ended task. The inclusion of many agents and species leads to better exploration, divergent niche formation, and greater overall competence.
Researchers Are Training AI to Survive In This MMO
Neural MMO is a new massively multiplayer online game, but humans aren't invited--only artificial intelligence can play. In the game, AI agents spawn into an open world and need to gather resources like food and water to survive. Along the way, they'll encounter rival agents which they can avoid or fight for dominance. It's a harsh world, designed by its creators to prompt the AI agents to develop strategies that satisfy a task that is both open-ended and highly complex: survival over a lifetime. OpenAI researchers Joseph Suarez, Yilun Du, Phillip Isola, and Igor Mordatch designed Neural MMO and released its code via GitHub on Monday.
OpenAI launches Neural MMO, a massive reinforcement learning simulator
Artificial intelligence that's beastly at World of Warcraft might not lie too far into the distant future, if OpenAI has its way. The San Francisco research nonprofit today released Neural MMO, a "massively multiagent" virtual training ground that plops agents in the middle of an RPG-like world -- one complete with a resource collection mechanic and player versus player combat. "The game genre of Massively Multiplayer Online Games (MMOs) simulates a large ecosystem of a variable number of players competing in persistent and extensive environments," OpenAI wrote in a blog post. "The inclusion of many agents and species leads to better exploration, divergent niche formation, and greater overall competence." AI agents spawn randomly in Neural MMO environments, which contain automatically generated tile maps of a prespecified size. Some tiles are traversable, like "forest" (which bears food) and "grass," while others aren't (such as water and stone).
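The tile mechanics described in the article can be sketched concretely: forest and grass are traversable, forest bears food, and water and stone block movement. The rules and function names below are illustrative assumptions for exposition, not Neural MMO's actual implementation.

```python
# Toy sketch of the tile mechanics the article describes. Tile names follow
# the article; the rules and API here are assumptions, not Neural MMO code.

TRAVERSABLE = {"grass", "forest"}   # agents may walk on these tiles
IMPASSABLE = {"water", "stone"}     # these tiles block movement

def can_move(tile_map, row, col):
    """An agent may occupy a tile only if it is in bounds and traversable."""
    in_bounds = 0 <= row < len(tile_map) and 0 <= col < len(tile_map[0])
    return in_bounds and tile_map[row][col] in TRAVERSABLE

def forage(tile_map, row, col, food):
    """Forest tiles bear food; standing on one replenishes the agent."""
    if tile_map[row][col] == "forest":
        return food + 1
    return food

world = [
    ["grass", "forest", "water"],
    ["stone", "grass", "grass"],
]
assert can_move(world, 0, 1)        # forest is traversable
assert not can_move(world, 0, 2)    # water blocks movement
assert forage(world, 0, 1, food=5) == 6
```

Procedural generation then amounts to filling such a grid with tiles drawn from a distribution, producing the "automatically generated tile maps of a prespecified size" the article mentions.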
Neural MMO: A Massively Multiagent Game Environment for Training and Evaluating Intelligent Agents
Suarez, Joseph, Du, Yilun, Isola, Phillip, Mordatch, Igor
The emergence of complex life on Earth is often attributed to the arms race that ensued from a huge number of organisms all competing for finite resources. We present an artificial intelligence research environment, inspired by the human game genre of MMORPGs (Massively Multiplayer Online Role-Playing Games, a.k.a. MMOs), that aims to simulate this setting in microcosm. As with MMORPGs and the real world alike, our environment is persistent and supports a large and variable number of agents. Our environment is well suited to the study of large-scale multiagent interaction: it requires that agents learn robust combat and navigation policies in the presence of large populations attempting to do the same. Baseline experiments reveal that population size magnifies and incentivizes the development of skillful behaviors and results in agents that outcompete agents trained in smaller populations. We further show that the policies of agents with unshared weights naturally diverge to fill different niches in order to avoid competition.